The electron density of molecules and materials has recently received major attention as a target quantity for machine-learning models. A natural choice for constructing a model that yields transferable and linear-scaling predictions is to represent the scalar field using an atom-centered basis similar to those routinely used in density-fitting approximations. However, the non-orthogonality of the basis poses a challenge for the learning exercise, as it requires accounting for all of the atomic density components at once. We devise a gradient-based approach to directly minimize the loss function of the regression problem in an optimized and highly sparse feature space. In so doing, we overcome the limitations associated with adopting an atom-centered model to learn the electron density over arbitrarily complex datasets, obtaining extremely accurate predictions. The enhanced framework is tested on 32-molecule periodic cells of liquid water, which offer enough complexity to require an optimal balance between accuracy and computational efficiency. We show that, starting from the predicted density, a single Kohn-Sham diagonalization step can be performed to access total-energy components that carry an error of just 0.1 meV/atom with respect to the reference density-functional calculation. Finally, we test the method on the highly heterogeneous QM9 benchmark dataset, showing that a small fraction of the training data is enough to derive ground-state total energies within chemical accuracy.
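As a minimal sketch of the gradient-based idea described above, one can picture a linear model mapping a sparse feature matrix to density-fitting coefficients and minimizing a quadratic loss whose metric is the overlap matrix of the non-orthogonal atom-centered basis. All names, shapes, and the plain gradient-descent loop below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def density_loss_and_grad(w, X, c_ref, S):
    """Loss ||rho_pred - rho_ref||^2 expressed in a non-orthogonal basis.

    w     : (n_feat,)          regression weights
    X     : (n_coef, n_feat)   feature matrix mapping weights to coefficients
    c_ref : (n_coef,)          reference density-fitting coefficients
    S     : (n_coef, n_coef)   overlap matrix of the atom-centered basis
    """
    delta = X @ w - c_ref            # residual on the expansion coefficients
    loss = delta @ (S @ delta)       # metric given by the overlap matrix
    grad = 2.0 * X.T @ (S @ delta)   # gradient with respect to the weights
    return loss, grad

# Toy data: 50 basis functions, 10 sparse features.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = rng.normal(size=10)
A = rng.normal(size=(50, 50))
S = A @ A.T + 1e-3 * np.eye(50)      # symmetric positive-definite overlap
c_ref = X @ w_true + 0.01 * rng.normal(size=50)

# Plain gradient descent with a step size safe for this quadratic loss.
H = 2.0 * X.T @ S @ X
lr = 1.0 / np.linalg.eigvalsh(H).max()
w = np.zeros(10)
for step in range(2000):
    loss, grad = density_loss_and_grad(w, X, c_ref, S)
    w -= lr * grad
print(f"final loss: {loss:.3e}")
```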
There is a growing interest in distributed optimization frameworks that go under the name of Federated Learning (FL). Of particular interest are scenarios in which the network is strongly heterogeneous in terms of communication resources (e.g., bandwidth) and data distribution. In these cases, communication between the local machines (agents) and the central server (master) is the main consideration. In this work, we present SHED, an original communication-constrained Newton-type (NT) algorithm designed to accelerate FL in such heterogeneous scenarios. SHED is by design robust to non-i.i.d. data distributions, handles heterogeneity of agents' communication resources (CRs), only requires sporadic Hessian computations, and achieves super-linear convergence. This is possible thanks to an incremental strategy, based on the eigendecomposition of the agents' local Hessian matrices, which exploits (possibly) outdated second-order information. The proposed solution is thoroughly validated on real datasets by assessing (i) the number of communication rounds required for convergence, (ii) the overall amount of data transmitted, and (iii) the number of local Hessian computations. For all these metrics, the proposed approach shows superior performance with respect to state-of-the-art techniques such as GIANT and FedNL.
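A rough illustration of the eigendecomposition-based idea follows: each agent sends only the top-k eigenpairs of its (possibly outdated) local Hessian, the master rebuilds a low-rank-plus-diagonal approximation, and takes a Newton-type step. This is only a sketch under assumed names and a ridge correction rho, not the actual SHED algorithm, whose incremental and communication-aware logic is considerably more involved:

```python
import numpy as np

def compress_hessian(H, k):
    """Top-k eigenpairs of a local Hessian (what an agent would transmit)."""
    vals, vecs = np.linalg.eigh(H)           # eigenvalues in ascending order
    return vals[-k:], vecs[:, -k:]

def approx_hessian(vals, vecs, rho):
    """Low-rank-plus-diagonal Hessian approximation rebuilt at the master."""
    d = vecs.shape[0]
    return vecs @ np.diag(vals) @ vecs.T + rho * np.eye(d)

# Toy quadratic problem split across 3 agents: f_i(w) = 0.5 w^T H_i w - b_i^T w
rng = np.random.default_rng(1)
d, k, rho = 20, 5, 0.1
agents = []
for _ in range(3):
    A = rng.normal(size=(d, d))
    agents.append((A @ A.T / d, rng.normal(size=d)))

w = np.zeros(d)
for it in range(10):
    grad = sum(H @ w - b for H, b in agents) / len(agents)
    # Only k eigenpairs per agent enter the master's Hessian estimate.
    H_hat = sum(approx_hessian(*compress_hessian(H, k), rho)
                for H, _ in agents) / len(agents)
    w -= np.linalg.solve(H_hat, grad)         # Newton-type update at the master
    print(it, np.linalg.norm(grad))
```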
The commercial availability of low-cost millimeter-wave (mmWave) communication and radar devices is starting to improve the penetration of such technologies in the consumer market, paving the way for large-scale and dense deployments in fifth-generation (5G)-and-beyond and 6G networks. At the same time, pervasive mmWave access will enable device localization and device-free sensing with unprecedented accuracy, especially with respect to sub-6 GHz commercial-grade devices. This paper surveys the state of the art in device-based localization and device-free sensing using mmWave communication and radar devices, with a focus on indoor deployments. We first provide an overview of key concepts about mmWave signal propagation and system design. Then, we provide a detailed account of approaches and algorithms for localization and sensing enabled by mmWaves. We consider several dimensions in our analysis, including the main objectives, techniques, and performance of each work, whether each study reached some degree of implementation, and which hardware platforms were used for this purpose. We conclude by discussing that better algorithms for consumer-grade devices, data-fusion methods for dense deployments, and the educated application of machine learning methods are promising, relevant, and timely research directions.
Social distancing and temperature screening have been widely employed to counteract the COVID-19 pandemic, sparking great interest from academia, industry, and public administrations worldwide. While most solutions have dealt with these aspects separately, their combination would greatly benefit the continuous monitoring of public spaces and help trigger effective countermeasures. This work presents Millitrace-IR, a joint millimeter-wave (mmWave) radar and infrared imaging sensing system performing unobtrusive and privacy-preserving monitoring in indoor spaces. Millitrace-IR combines mmWave radars and infrared thermal cameras through a robust sensor-fusion approach. It achieves fully automated measurement of distancing and body temperature by jointly tracking the subjects' faces in the thermal-camera image plane and their motion in the radar reference system. Moreover, Millitrace-IR performs contact tracing: the thermal-camera sensor reliably detects people with elevated body temperature, who are then tracked non-invasively across large indoor areas by the radar. When entering a new room, a subject is re-identified among many others by computing gait-related features from the radar reflections through a deep neural network and using a weighted extreme learning machine as the final re-identification tool. Experimental results, obtained from a real implementation of Millitrace-IR, demonstrate decimeter-level accuracy in distance/trajectory estimation and inter-personal distance estimation (effective for subjects getting as close as 0.2 m), along with accurate temperature monitoring (maximum error of 0.5 °C). Furthermore, Millitrace-IR provides contact tracing through highly accurate (95%) person re-identification in less than 20 seconds.
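For the final re-identification stage mentioned above, a weighted extreme learning machine is, in essence, a single random hidden layer whose output weights come from a weighted ridge regression. The sketch below illustrates that idea only; the stand-in "gait feature" vectors and the class-balancing weighting scheme are assumptions, not the paper's actual radar features or training recipe:

```python
import numpy as np

class WeightedELM:
    """Single-hidden-layer extreme learning machine with per-sample weights."""

    def __init__(self, n_hidden=200, reg=1.0, seed=0):
        self.n_hidden, self.reg, self.seed = n_hidden, reg, seed

    def _hidden(self, X):
        return np.tanh(X @ self.W_in + self.b)   # random feature map

    def fit(self, X, y, sample_weight=None):
        rng = np.random.default_rng(self.seed)
        self.classes_ = np.unique(y)
        self.W_in = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        T = (y[:, None] == self.classes_[None, :]).astype(float)  # one-hot targets
        if sample_weight is None:
            # Class-balancing weights, a common choice for weighted ELMs.
            counts = {c: np.sum(y == c) for c in self.classes_}
            sample_weight = np.array([1.0 / counts[c] for c in y])
        W = np.diag(sample_weight)
        # Weighted ridge regression for the output weights.
        self.beta = np.linalg.solve(H.T @ W @ H + self.reg * np.eye(self.n_hidden),
                                    H.T @ W @ T)
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]

# Toy usage with stand-in feature vectors for 3 subjects.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=i, size=(30, 8)) for i in range(3)])
y = np.repeat([0, 1, 2], 30)
clf = WeightedELM(n_hidden=100).fit(X, y)
print((clf.predict(X) == y).mean())
```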
In this article we present SHARP, an original approach for obtaining human activity recognition (HAR) through the use of commercial IEEE 802.11 (Wi-Fi) devices. SHARP grants the possibility to discern the activities of different persons, across different time-spans and environments. To achieve this, we devise a new technique to clean and process the channel frequency response (CFR) phase of the Wi-Fi channel, obtaining an estimate of the Doppler shift at a radio monitor device. The Doppler shift reveals the presence of moving scatterers in the environment, while not being affected by (environment-specific) static objects. SHARP is trained on data collected as a person performs seven different activities in a single environment. It is then tested on different setups, to assess its performance as the person, the day and/or the environment change with respect to those considered at training time. In the worst-case scenario, it reaches an average accuracy higher than 95%, validating the effectiveness of the extracted Doppler information, used in conjunction with a learning algorithm based on a neural network, in recognizing human activities in a subject and environment independent way. The collected CFR dataset and the code are publicly available for replicability and benchmarking purposes.
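A toy sketch of the general idea of turning a slow-time series of CFR samples into a Doppler spectrum follows; it is not SHARP's actual phase-sanitization pipeline, which is considerably more elaborate. Here the packet-wise phase offsets are removed by the common trick of multiplying each subcarrier by the conjugate of a reference subcarrier, and a short-time Fourier transform along the packet axis exposes moving scatterers:

```python
import numpy as np

def doppler_spectrum(cfr, ref_subcarrier=0, win=64, hop=16):
    """cfr: complex array of shape (n_packets, n_subcarriers).

    Returns an (n_windows, win) power spectrogram along the slow-time axis.
    """
    # Crude phase sanitization: conjugate multiplication with a reference
    # subcarrier cancels per-packet phase offsets (CFO/SFO); this stands in
    # for SHARP's more sophisticated processing.
    clean = cfr * np.conj(cfr[:, [ref_subcarrier]])
    sig = clean.mean(axis=1)                       # average over subcarriers
    window = np.hanning(win)
    frames = []
    for start in range(0, len(sig) - win + 1, hop):
        seg = sig[start:start + win] * window
        spec = np.fft.fftshift(np.fft.fft(seg))    # zero-centred Doppler bins
        frames.append(np.abs(spec) ** 2)
    return np.array(frames)

# Synthetic CFR: a static path plus a scatterer with a 40 Hz Doppler shift,
# sampled at 1 kHz (one packet per millisecond), corrupted by per-packet
# random phase offsets that mimic CFO/SFO.
rng = np.random.default_rng(0)
fs, n_pkt, n_sub = 1000, 2048, 30
t = np.arange(n_pkt) / fs
static = np.ones((n_pkt, n_sub), dtype=complex)
moving = 0.3 * np.exp(2j * np.pi * 40 * t)[:, None] * np.ones((1, n_sub))
offsets = np.exp(2j * np.pi * rng.random(n_pkt))
cfr = offsets[:, None] * (static + moving)

spec = doppler_spectrum(cfr)
print(spec.shape)              # (n_windows, 64) Doppler power spectrogram
```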
Existing automated techniques for software documentation typically attempt to reason between two main sources of information: code and natural language. However, this reasoning process is often complicated by the lexical gap between more abstract natural language and more structured programming languages. One potential bridge for this gap is the Graphical User Interface (GUI), as GUIs inherently encode salient information about underlying program functionality into rich, pixel-based data representations. This paper offers one of the first comprehensive empirical investigations into the connection between GUIs and functional, natural language descriptions of software. First, we collect, analyze, and open source a large dataset of functional GUI descriptions consisting of 45,998 descriptions for 10,204 screenshots from popular Android applications. The descriptions were obtained from human labelers and underwent several quality control mechanisms. To gain insight into the representational potential of GUIs, we investigate the ability of four Neural Image Captioning models to predict natural language descriptions of varying granularity when provided a screenshot as input. We evaluate these models quantitatively, using common machine translation metrics, and qualitatively through a large-scale user study. Finally, we offer learned lessons and a discussion of the potential shown by multimodal models to enhance future techniques for automated software documentation.
Following the latest trends in artificial intelligence, AI systems need to clarify the general and specific decisions and services they provide. Consumers are satisfied only when an explanation is given, for example, of why a particular classification result was produced at a given time. This motivates us to use explainable, human-understandable AI for a behavioral-mining scenario, where user engagement on a digital platform is determined from context such as emotion, activity, and weather. However, the output of an AI system is not always systematically correct, and even when it is, it is often imperfect, thereby creating confusion: why was this decision given? What is the underlying reason? In this context, we first formulate the behavioral-mining problem within a deep convolutional neural network architecture. We then apply a recurrent neural network due to the presence of time-series data from the users' physiological and environmental sensor readings. Once the model is developed, explanations produced by XAI models are presented to the users. This critical step involves extensive trials of users' preference for explanations over conventional AI and their judgement of the credibility of the explanations.
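A minimal sketch of the kind of architecture hinted at above, assuming multivariate sensor readings shaped as (batch, channels, time): 1-D convolutions extract local patterns and a recurrent layer summarizes the sequence. The channel counts, hidden size, and class count are arbitrary placeholders, not the authors' network:

```python
import torch
import torch.nn as nn

class ConvRecurrentEngagementModel(nn.Module):
    """1-D convolutions over sensor channels followed by an LSTM over time."""

    def __init__(self, n_channels=6, n_classes=3, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # engagement classes

    def forward(self, x):                # x: (batch, channels, time)
        feats = self.conv(x)             # (batch, 64, time)
        feats = feats.transpose(1, 2)    # (batch, time, 64) for the LSTM
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])          # logits per engagement class

# Toy forward pass on fake physiological/environmental readings.
model = ConvRecurrentEngagementModel()
x = torch.randn(8, 6, 128)               # 8 windows, 6 sensors, 128 time steps
print(model(x).shape)                     # torch.Size([8, 3])
```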
Enterprise resource planning (ERP) software brings resources and data together to keep the software flow within a company's business processes. However, the promise of cheap, easy, and quick management offered by cloud computing pushes business owners toward a transition from monolithic to data-center/cloud-based ERP. Since cloud-ERP development involves a cyclic process of planning, implementing, testing, and upgrading, its adoption is cast as a deep recurrent neural network problem. A classification algorithm based on long short-term memory (LSTM) and TOPSIS is then proposed to identify and rank adoption features, respectively. Our theoretical model is validated against a reference model by articulating key players, services, architecture, and functionalities. A qualitative survey is conducted among users, considering technology, innovation, and resistance issues, to formulate hypotheses on key adoption factors.
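The TOPSIS step mentioned above has a standard formulation: normalize the decision matrix, weight it, compute distances to the ideal and anti-ideal alternatives, and rank by relative closeness. A sketch of that textbook procedure follows, with made-up feature scores and equal criterion weights rather than the paper's survey data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns) with TOPSIS.

    matrix  : (n_alt, n_crit) raw scores
    weights : (n_crit,) criterion weights summing to 1
    benefit : (n_crit,) booleans, True if larger is better
    """
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))   # vector normalization
    v = norm * weights                                    # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)            # distance to ideal
    d_minus = np.linalg.norm(v - anti, axis=1)            # distance to anti-ideal
    closeness = d_minus / (d_plus + d_minus)
    return closeness, np.argsort(-closeness)              # scores and ranking

# Toy example: 4 adoption features scored against 3 criteria.
scores = np.array([[7.0, 9.0, 9.0],
                   [8.0, 7.0, 8.0],
                   [9.0, 6.0, 8.0],
                   [6.0, 7.0, 8.0]])
closeness, ranking = topsis(scores,
                            weights=np.ones(3) / 3,
                            benefit=np.array([True, True, True]))
print(closeness.round(3), ranking)
```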
Mixtures of von Mises-Fisher distributions can be used to cluster data on the unit hypersphere. This is particularly suited to high-dimensional directional data such as texts. We propose in this article to estimate a von Mises-Fisher mixture using an l1-penalized likelihood. This leads to sparse prototypes that improve clustering interpretability. We introduce an expectation-maximisation (EM) algorithm for this estimation and explore the trade-off between the sparsity term and the likelihood term with a path-following algorithm. The model's behaviour is studied on simulated data, and we show the advantages of the approach on a real-data benchmark. We also introduce a new dataset of financial reports and exhibit the benefits of our method for exploratory analysis.
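A compact sketch of EM for a mixture of von Mises-Fisher distributions is shown below, using scipy for the Bessel function and the standard approximation of Banerjee et al. for the concentration parameters. The soft-thresholding of the unnormalized mean directions is a simplified stand-in for an l1-penalized M-step, not the paper's exact estimator or its path-following scheme:

```python
import numpy as np
from scipy.special import ive

def log_vmf_norm(kappa, d):
    """Log normalizing constant of a von Mises-Fisher distribution in R^d."""
    nu = d / 2.0 - 1.0
    return nu * np.log(kappa) - (d / 2.0) * np.log(2 * np.pi) \
           - (np.log(ive(nu, kappa)) + kappa)

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_movmf_em(X, k, lam=0.05, n_iter=50, seed=0):
    """EM for a vMF mixture with soft-thresholded mean directions (a sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]           # initialize means from data
    kappa = np.full(k, 10.0)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities.
        logp = (np.log(pi)[None, :] + log_vmf_norm(kappa, d)[None, :]
                + X @ (mu.T * kappa))
        logp -= logp.max(axis=1, keepdims=True)
        gamma = np.exp(logp)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step with sparsity on the (unnormalized) mean directions.
        for j in range(k):
            r = gamma[:, j] @ X
            r = soft_threshold(r, lam * gamma[:, j].sum())
            norm_r = np.linalg.norm(r)
            mu[j] = r / max(norm_r, 1e-12)
            rbar = min(norm_r / gamma[:, j].sum(), 0.999)
            kappa[j] = rbar * (d - rbar ** 2) / (1 - rbar ** 2)  # Banerjee approx.
        pi = gamma.mean(axis=0)
    return mu, kappa, pi, gamma

# Toy usage: two clusters on the unit sphere in R^10.
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 10)) + np.array([3] + [0] * 9)
B = rng.normal(size=(100, 10)) + np.array([0] * 9 + [3])
X = np.vstack([A, B])
X /= np.linalg.norm(X, axis=1, keepdims=True)
mu, kappa, pi, _ = sparse_movmf_em(X, k=2)
print(np.round(mu, 2))
```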
In this work, we introduce a hypergraph representation learning framework called Hypergraph Neural Networks (HNN) that jointly learns hyperedge embeddings along with a set of hyperedge-dependent embeddings for each node in the hypergraph. HNN derives multiple embeddings per node in the hypergraph where each embedding for a node is dependent on a specific hyperedge of that node. Notably, HNN is accurate, data-efficient, flexible with many interchangeable components, and useful for a wide range of hypergraph learning tasks. We evaluate the effectiveness of the HNN framework for hyperedge prediction and hypergraph node classification. We find that HNN achieves an overall mean gain of 7.72% and 11.37% across all baseline models and graphs for hyperedge prediction and hypergraph node classification, respectively.
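A rough sketch of the central idea, hyperedge-dependent node embeddings, is given below: each node receives a different representation for every hyperedge it belongs to, while hyperedge embeddings are aggregated from their member nodes. This is only an illustrative message-passing layer under assumed shapes and aggregation rules, not the HNN paper's exact update equations:

```python
import torch
import torch.nn as nn

class HyperedgeDependentLayer(nn.Module):
    """One layer producing hyperedge embeddings and hyperedge-dependent
    node embeddings from a node-hyperedge incidence structure."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.node_to_edge = nn.Linear(in_dim, out_dim)
        self.edge_to_node = nn.Linear(in_dim + out_dim, out_dim)

    def forward(self, x, incidence):
        # x         : (n_nodes, in_dim) base node features
        # incidence : (n_nodes, n_edges) binary node-hyperedge membership
        deg_e = incidence.sum(dim=0).clamp(min=1)            # hyperedge sizes
        # Hyperedge embedding: mean of its members' transformed features.
        e = torch.relu((incidence.t() @ self.node_to_edge(x)) / deg_e[:, None])
        # Hyperedge-dependent node embedding: one vector per (node, edge) pair.
        pairs = incidence.nonzero(as_tuple=False)             # (nnz, 2) indices
        node_view = torch.cat([x[pairs[:, 0]], e[pairs[:, 1]]], dim=1)
        node_per_edge = torch.relu(self.edge_to_node(node_view))
        return e, pairs, node_per_edge

# Toy hypergraph: 5 nodes, 3 hyperedges.
x = torch.randn(5, 8)
incidence = torch.tensor([[1., 0., 1.],
                          [1., 1., 0.],
                          [0., 1., 0.],
                          [1., 0., 1.],
                          [0., 1., 1.]])
layer = HyperedgeDependentLayer(8, 16)
e, pairs, node_per_edge = layer(x, incidence)
print(e.shape, node_per_edge.shape)   # (3, 16) and (nnz, 16)
```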